Image-based head swapping aims to seamlessly stitch a source head onto another source body. This seldom-studied task faces two major challenges: 1) preserving the head and body from different sources while generating a seamless transition region; 2) the absence of any paired head swapping dataset and benchmark so far. In this paper, we propose an image-based head swapping framework (HS-Diffusion) that consists of a semantic-guided latent diffusion model (SG-LDM) and a semantic layout generator. We blend the semantic layouts of the source head and source body, and then inpaint the transition region with the semantic layout generator, achieving coarse-grained head swapping. SG-LDM then performs fine-grained head swapping, conditioned on the blended layout, via a progressive fusion process, while preserving the source head and source body with high-quality reconstruction. To this end, we design a head-cover augmentation strategy for training and a neck alignment trick for geometric realism. Importantly, we construct a new image-based head swapping benchmark and propose two tailor-designed metrics (Mask-FID and Focal-FID). Extensive experiments demonstrate the superiority of our framework. The code will be available at: https://github.com/qinghew/HS-Diffusion.
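To make the coarse layout-blending step concrete, here is a minimal sketch (not the authors' code) of pasting the head region of one semantic layout onto a body layout and voiding a transition band for the layout generator to inpaint; the label IDs and band width are assumed placeholders.

```python
# Minimal sketch of the coarse layout-blending step (not the authors' code).
# Assumes semantic layouts are HxW integer label maps; HEAD_LABELS and the
# band width are hypothetical placeholders.
import numpy as np

HEAD_LABELS = {1, 2, 3}   # e.g. face, hair, ears -- assumed label IDs
UNKNOWN = 255             # marks the transition region to be inpainted

def blend_layouts(head_layout: np.ndarray, body_layout: np.ndarray,
                  band: int = 12) -> np.ndarray:
    """Paste the head region onto the body layout and void a transition band."""
    blended = body_layout.copy()
    head_mask = np.isin(head_layout, list(HEAD_LABELS))
    blended[head_mask] = head_layout[head_mask]

    # Void a horizontal band below the lowest head pixel as the unknown
    # transition (neck) region; the semantic layout generator would then
    # inpaint these labels.
    rows = np.where(head_mask.any(axis=1))[0]
    if rows.size:
        lo = rows.max()
        blended[lo:lo + band, :] = UNKNOWN
    return blended
```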
Video snapshot compressive imaging (SCI) captures multiple sequential video frames with a single measurement, using the idea of computational imaging. The underlying principle is to modulate the high-speed frames through different masks, and these modulated frames are summed into a single measurement captured by a low-speed 2D sensor (dubbed the optical encoder); following this, algorithms are employed to reconstruct the desired high-speed frames (dubbed the software decoder) if needed. In this paper, we consider the reconstruction algorithm in video SCI, i.e., recovering a series of video frames from a compressed measurement. Specifically, we propose a Space-Time Transformer (STFormer) to exploit the correlation in both spatial and temporal domains. The STFormer network is composed of a token generation block and a video reconstruction block, and these two blocks are connected by a series of STFormer blocks. Each STFormer block consists of a spatial self-attention branch and a temporal self-attention branch, and the outputs of these two branches are integrated by a fusion network. Extensive results on both simulated and real data demonstrate the state-of-the-art performance of STFormer. The code and models are publicly available at https://github.com/ucaswangls/stformer.git
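The optical-encoder forward model described above can be written as a sum of mask-modulated frames; the sketch below illustrates it with random frames and Bernoulli masks, which are illustrative stand-ins rather than a real SCI setup.

```python
# Sketch of the video SCI optical-encoder forward model: T high-speed frames
# are modulated by per-frame binary masks and summed into one 2D measurement.
# Shapes and the random masks below are illustrative only.
import numpy as np

def sci_forward(frames: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """frames, masks: (T, H, W) -> single measurement (H, W)."""
    assert frames.shape == masks.shape
    return (masks * frames).sum(axis=0)

T, H, W = 8, 256, 256
frames = np.random.rand(T, H, W)                       # stand-in high-speed video
masks = (np.random.rand(T, H, W) > 0.5).astype(float)  # random binary masks
measurement = sci_forward(frames, masks)               # what the 2D sensor records
# A reconstruction network such as STFormer takes `measurement` (and the known
# masks) and recovers an estimate of `frames`.
```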
360° cameras have gained popularity over the last few years. In this paper, we propose two fundamental techniques, Field-of-View IoU (FoV-IoU) and 360Augmentation, for object detection in 360° images. Although most object detection neural networks designed for perspective images are applicable to 360° images in equirectangular projection (ERP) format, their performance deteriorates due to the distortion of ERP images. Our methods can be readily integrated with existing perspective object detectors and significantly improve their performance. FoV-IoU computes the intersection-over-union of two field-of-view bounding boxes in a spherical image, which can be used for training, inference, and evaluation, while 360Augmentation is a data augmentation technique specific to the 360° object detection task that randomly rotates a spherical image and addresses the bias caused by the sphere-to-plane projection. We conduct extensive experiments on the 360-Indoor dataset with different types of perspective object detectors and show the consistent effectiveness of our methods.
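As a rough illustration of the idea (not the paper's exact formulation), the sketch below approximates the IoU of two field-of-view boxes, each parameterized by a center longitude/latitude and angular extents, with a latitude-dependent correction for the sphere-to-plane distortion.

```python
# Rough, hedged sketch of a field-of-view (FoV) IoU between two spherical
# boxes, each given as (theta, phi, fov_w, fov_h) in radians (center
# longitude/latitude and angular extents). This only approximates the idea;
# the exact FoV-IoU formulation in the paper may differ.
import math

def fov_iou(b1, b2):
    t1, p1, w1, h1 = b1
    t2, p2, w2, h2 = b2
    # Longitudinal offset shrinks with latitude (sphere-to-plane correction).
    d_theta = (t1 - t2) * math.cos((p1 + p2) / 2.0)
    d_phi = p1 - p2
    inter_w = max(0.0, (w1 + w2) / 2.0 - abs(d_theta))
    inter_h = max(0.0, (h1 + h2) / 2.0 - abs(d_phi))
    inter = min(inter_w, w1, w2) * min(inter_h, h1, h2)
    union = w1 * h1 + w2 * h2 - inter
    return inter / union if union > 0 else 0.0
```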
Mixture-of-experts (MoE) has become popular due to its success in improving model quality, especially in Transformers. By routing tokens with a sparse gate to a few experts, each of which contains only part of the full model, MoE keeps the model size unchanged and significantly reduces per-token computation, which effectively scales neural networks. However, we find that the current approach of jointly training the experts and the sparse gate introduces a negative impact on model accuracy, diminishing the efficiency of expensive large-scale model training. In this work, we propose a Dense-to-Sparse gate (DTS-Gate) for MoE training. Specifically, instead of using a permanently sparse gate, DTS-Gate begins as a dense gate that routes tokens to all experts, and then gradually and adaptively becomes sparser, routing to fewer experts. MoE with DTS-Gate naturally decouples the training of the experts and the sparse gate by first training all experts and then learning the sparse gate. Experiments show that, compared with the state-of-the-art Switch gate in a GPT-MoE (1.5B) model on the OpenWebText dataset (40GB), DTS-Gate obtains a 2.0x speed-up to reach the same validation perplexity, as well as higher FLOPs efficiency with a 1.42x speed-up.
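A hedged sketch of the dense-to-sparse idea: early in training the gate is a full softmax over all experts, and the number of active experts per token is gradually annealed; the schedule and defaults below are illustrative, not the paper's exact configuration.

```python
# Hedged sketch in the spirit of DTS-Gate: start dense (route to all experts),
# then anneal the number of active experts k toward a small value. The linear
# schedule and hyperparameters are assumptions, not the paper's settings.
import torch
import torch.nn.functional as F

def dts_gate(logits: torch.Tensor, step: int, total_steps: int,
             n_experts: int, k_min: int = 2) -> torch.Tensor:
    """logits: (tokens, n_experts) -> gating weights of the same shape."""
    # Linearly anneal how many experts each token is routed to.
    frac = min(1.0, step / max(1, total_steps))
    k = max(k_min, round(n_experts - frac * (n_experts - k_min)))
    probs = F.softmax(logits, dim=-1)
    topk_vals, topk_idx = probs.topk(k, dim=-1)
    gates = torch.zeros_like(probs).scatter(-1, topk_idx, topk_vals)
    return gates / gates.sum(dim=-1, keepdim=True)  # renormalize over active experts
```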
The data sparsity problem is a key challenge of Natural Language Understanding (NLU), especially for a new target domain. By training an NLU model in source domains and applying the model to an arbitrary target domain directly (even without fine-tuning), few-shot NLU becomes crucial to mitigating the data scarcity issue. In this paper, we propose to improve prototypical networks with vector projection distance and an abstract triangular Conditional Random Field (CRF) for few-shot NLU. The vector projection distance exploits the projections of contextual word embeddings onto label vectors as word-label similarities, which is equivalent to a normalized linear model. The abstract triangular CRF learns domain-agnostic label transitions for the joint intent classification and slot filling tasks. Extensive experiments demonstrate that our proposed methods significantly outperform strong baselines. Specifically, our approach achieves a new state of the art on two few-shot NLU benchmarks (FewJoint and SNIPS) in Chinese and English, without fine-tuning on the target domains.
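The vector projection similarity can be sketched as follows: the word-label score is the length of the contextual embedding's projection onto each label vector, which behaves like a normalized linear model (any bias term used in the paper is omitted here).

```python
# Minimal sketch of a vector projection similarity: the score of a contextual
# word embedding x against label vector w_k is (w_k . x) / ||w_k||.
# This is an illustration of the idea; the paper's exact scoring may add a bias.
import torch

def projection_similarity(x: torch.Tensor, label_vecs: torch.Tensor) -> torch.Tensor:
    """x: (d,) embedding; label_vecs: (K, d) -> (K,) word-label similarities."""
    norms = label_vecs.norm(dim=-1).clamp_min(1e-8)
    return (label_vecs @ x) / norms
```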
The scope of data-driven fault diagnosis models has been greatly extended by deep learning (DL). However, classical convolutional and recurrent structures have drawbacks in computational efficiency and feature representation, while the latest Transformer architecture based on the attention mechanism has not yet been applied to this field. To address these issues, we propose a novel time-frequency Transformer (TFT) model, inspired by the massive success of the vanilla Transformer in sequence processing. In particular, we design a fresh tokenizer and encoder module to extract effective abstractions from the time-frequency representation (TFR) of vibration signals. On this basis, a new end-to-end fault diagnosis framework based on the time-frequency Transformer is presented in this paper. Through case studies on bearing experimental datasets, we construct the optimal Transformer structure and verify its fault diagnosis performance. The superiority of the proposed method is demonstrated in comparison with benchmark models and other state-of-the-art methods.
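A minimal sketch of the general pipeline, assuming an STFT magnitude as the time-frequency representation and one token per STFT frame; the actual tokenizer/encoder design of TFT may differ.

```python
# Hedged sketch: vibration signal -> STFT magnitude (a TFR) -> one token per
# STFT frame -> standard Transformer encoder -> fault class logits.
# Architecture sizes are illustrative, not the TFT paper's configuration.
import torch
import torch.nn as nn

class TFRClassifier(nn.Module):
    def __init__(self, n_fft: int = 64, d_model: int = 128, n_classes: int = 10):
        super().__init__()
        self.n_fft = n_fft
        self.embed = nn.Linear(n_fft // 2 + 1, d_model)  # one token per STFT frame
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        # signal: (batch, samples)
        window = torch.hann_window(self.n_fft, device=signal.device)
        tfr = torch.stft(signal, self.n_fft, window=window,
                         return_complex=True).abs()       # (B, F, T)
        tokens = self.embed(tfr.transpose(1, 2))          # (B, T, d_model)
        feats = self.encoder(tokens).mean(dim=1)          # pooled representation
        return self.head(feats)
```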
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
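As a very rough, hedged sketch of the general idea (not NeurDP's architecture), one can represent a low-level function as a graph of instruction nodes, propagate information with a small graph neural network, and predict an IR token per node; the paper's graph construction and OTU splitting are not reproduced here.

```python
# Very rough, hedged sketch: instruction nodes + adjacency -> tiny GNN ->
# per-node IR token logits. NeurDP's actual graph construction, GNN design,
# and OTU splitting are not reproduced here.
import torch
import torch.nn as nn

class TinyGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) normalized adjacency over instruction nodes.
        return torch.relu(self.lin(adj @ x))

class InstrToIR(nn.Module):
    def __init__(self, n_instr_types: int, n_ir_tokens: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_instr_types, dim)
        self.gnn = nn.ModuleList([TinyGNNLayer(dim) for _ in range(2)])
        self.out = nn.Linear(dim, n_ir_tokens)

    def forward(self, instr_ids: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.embed(instr_ids)      # (N, dim) node features
        for layer in self.gnn:
            h = layer(h, adj)
        return self.out(h)             # per-node IR token logits
```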
Nearest-Neighbor (NN) classification has been proven to be a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support sample based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or at least be comparable to, SOTA methods that require additional learning steps.
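A hedged sketch of prior-driven calibration in this spirit: each support feature is mixed with a similarity-weighted combination of base-class prototypes before nearest-neighbor classification; the mixing coefficient and temperature are assumed hyperparameters, not the paper's values.

```python
# Hedged sketch of prior-driven support calibration: pull each support feature
# toward base prototypes weighted by cosine similarity, then run NN
# classification on the calibrated supports. alpha and tau are assumptions.
import torch
import torch.nn.functional as F

def calibrate_supports(support: torch.Tensor, base_protos: torch.Tensor,
                       alpha: float = 0.7, tau: float = 10.0) -> torch.Tensor:
    """support: (N, d), base_protos: (B, d) -> calibrated supports (N, d)."""
    sims = F.softmax(tau * F.normalize(support, dim=-1) @
                     F.normalize(base_protos, dim=-1).T, dim=-1)  # (N, B)
    prior = sims @ base_protos                                    # similarity-weighted prior
    return alpha * support + (1.0 - alpha) * prior
```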
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to generate a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to balance the trade-off between content details and style features. When an image is stylized with abundant style patterns, the content details may be damaged, and sometimes the objects in the image can no longer be distinguished clearly. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves comparable performance to state-of-the-art image style transfer methods while alleviating the content leak problem.
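One plausible instantiation of such an edge loss (the exact operator and weighting in STT may differ) compares Sobel edge maps of the content image and the stylized output:

```python
# Hedged sketch of an edge loss: compare Sobel edge maps of the stylized
# output and the content image so that content structure is preserved.
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def edge_map(img: torch.Tensor) -> torch.Tensor:
    gray = img.mean(dim=1, keepdim=True)               # (B, 1, H, W)
    gx = F.conv2d(gray, _SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(gray, _SOBEL_Y.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_loss(stylized: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    return F.l1_loss(edge_map(stylized), edge_map(content))
```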
In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently become a promising approach to ensuring that the whole system comes with a stability guarantee. However, the classical Lyapunov constraints introduced in prior work cannot stabilize the system during sampling-based optimization. Therefore, we propose Adaptive Stability Certification (ASC), which makes the system reach sampling-based stability. Because the ASC condition can search for the optimal policy heuristically, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm based on the ASC condition. Meanwhile, our algorithm avoids the optimization problem in current approaches where a variety of constraints are coupled into the objective. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous studies.
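As an illustrative sketch (not the exact ASC/ALAC objective), a sampling-based Lyapunov decrease condition can be enforced as a penalty on transitions where the learned Lyapunov function fails to decrease:

```python
# Hedged sketch: along sampled transitions (s, s'), require the learned
# Lyapunov function to decrease, L(s') - L(s) <= -alpha * L(s), and penalize
# violations. This illustrates the idea only; it is not the ASC/ALAC objective.
import torch

def lyapunov_violation(L_s: torch.Tensor, L_s_next: torch.Tensor,
                       alpha: float = 0.1) -> torch.Tensor:
    """L_s, L_s_next: Lyapunov values at sampled states s and s'."""
    return torch.relu(L_s_next - (1.0 - alpha) * L_s).mean()

# Example usage inside a policy update (lambda_ is an assumed penalty weight):
# policy_loss = actor_objective + lambda_ * lyapunov_violation(L_s, L_s_next)
```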